Does MaxEnt Overgenerate? Implicational Universals in Maximum Entropy Grammar

Authors

  • Arto Anttila
  • Giorgio Magri
Abstract

It goes without saying that a good linguistic theory should neither undergenerate (i.e., it should not miss any attested patterns) nor overgenerate (i.e., it should not predict any “unattestable” patterns). Recent literature has argued that the Maximum Entropy (ME; Goldwater & Johnson 2003) framework provides a probabilistic extension of categorical Harmonic Grammar (HG; Legendre et al. 1990; Smolensky & Legendre 2006) which is rich enough to model attested patterns of variable and gradient phonology (see for instance Hayes & Wilson 2008, Zuraw & Hayes 2017, and Smith & Pater 2017). In other words, ME is rich enough to avoid undergeneration. But does ME’s richness come at the expense of overgeneration? This is the question that we would like to start investigating in this paper. In the case of a categorical phonological theory such as HG, overgeneration can be investigated directly by exhaustively listing all the grammars predicted by the theory for certain constraint and candidate sets. That is possible because the predicted typology of grammars is usually finite. The situation is rather different for probabilistic theories such as ME. In this case, the typology consists of an infinite number of probability distributions which therefore cannot be exhaustively listed and directly inspected. A more indirect strategy is needed to glance at the boundary of the probabilistic typology and thus investigate its overgeneration. A natural indirect strategy that gets around the problem raised by an infinite typology is to enumerate, not the individual languages/grammars/distributions in the typology, but the corresponding set of implicational universals predicted by the typology. An implicational universal is an implication
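The abstract contrasts categorical HG, which selects a single optimal candidate, with ME, which assigns a probability distribution over the whole candidate set. As a minimal sketch of that contrast (the constraint names, weights, and violation counts below are invented for illustration, not taken from the paper), the toy tableau computes the ME distribution of Goldwater & Johnson (2003), where each candidate's probability is proportional to the exponential of its harmony, alongside the single winner that categorical HG would return for the same weights.

```python
import math

def harmony(violations, weights):
    """Harmony of a candidate: the negated weighted sum of its constraint violations."""
    return -sum(w * v for w, v in zip(weights, violations))

def maxent_probabilities(candidates, weights):
    """ME grammar: P(candidate) proportional to exp(harmony), normalized over the candidate set."""
    scores = [math.exp(harmony(vec, weights)) for vec in candidates.values()]
    z = sum(scores)  # normalization constant
    return {cand: s / z for cand, s in zip(candidates, scores)}

def hg_winner(candidates, weights):
    """Categorical HG: the candidate with maximal harmony wins outright."""
    return max(candidates, key=lambda c: harmony(candidates[c], weights))

# Hypothetical two-candidate tableau; violation counts for [Markedness, Faithfulness].
candidates = {"ta": [0, 1], "da": [1, 0]}
weights = [2.0, 1.0]

print(maxent_probabilities(candidates, weights))  # roughly {'ta': 0.73, 'da': 0.27}
print(hg_winner(candidates, weights))             # 'ta'
```

With these toy weights, ME distributes probability over both candidates while HG categorically picks "ta"; letting the weights range over all nonnegative values is what yields the infinite typology of distributions that the abstract says cannot be listed exhaustively.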


Similar articles

Maximum entropy model parameterization with TF∗IDF weighted vector space model

Maximum entropy (MaxEnt) models have been used in many spoken language tasks. The training of a MaxEnt model often involves an iterative procedure that starts from an initial parameterization and gradually updates it towards the optimum. Due to the convexity of its objective function (hence a global optimum on a training set), little attention has been paid to model initialization in MaxEnt tra...


Speed Gradient and MaxEnt principle for Shannon and Tsallis entropies

The notion of entropy is widely used in modern statistical physics, thermodynamics, information theory, engineering, etc. In 1948, Claude Shannon introduced his information entropy for an absolutely continuous random variable x having probability density function (pdf) p. In 1988, Constantino Tsallis introduced a generalized Shannon entropy. Tsallis entropy has found applications in various sci...


Maximum Entropy Production as an Inference Algorithm that Translates Physical Assumptions into Macroscopic Predictions: Don't Shoot the Messenger

Is Maximum Entropy Production (MEP) a physical principle? In this paper I tentatively suggest it is not, on the basis that MEP is equivalent to Jaynes’ Maximum Entropy (MaxEnt) inference algorithm that passively translates physical assumptions into macroscopic predictions, as applied to non-equilibrium systems. MaxEnt itself has no physical content; disagreement between MaxEnt predictions and e...


Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.

Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains:...


Maximum Entropy and Maximum Probability

Sanov’s Theorem and the Conditional Limit Theorem (CoLT) are established for a multicolor Pólya Eggenberger urn sampling scheme, giving the Pólya divergence and the Pólya extension to the Maximum Relative Entropy (MaxEnt) method. Pólya MaxEnt includes the standard MaxEnt as a special case. The universality of standard MaxEnt advocated by an axiomatic approach to inference for inverse problems i...



Publication date: 2018